The Safe and Effective Clinical Deployment of Artificial Intelligence Tools
18 million new cancer cases are diagnosed each year. Roughly half of these patients will be treated with radiation therapy, a complex technique that requires an interdisciplinary team of clinical staff and expensive equipment to be delivered safely. Cancer centers in Low- and Middle-Income Countries (LMIC) have an especially difficult time meeting the demands of radiation therapy as the complexity of treatment techniques increases, with only 37% of patients in these regions having access to the care they need. Artificial Intelligence (AI)-based tools are being developed to simplify the treatment planning and quality assurance processes, increasing the number of patients who can be treated and improving the quality of their treatment plans. While AI techniques have shown great promise, with any new technology it is important not only to assess the potential benefits but also the associated risks. To this end, we have performed a risk assessment of our in-house automated treatment planning system, the Radiation Planning Assistant (RPA), to identify points of risk and subsequently develop appropriate quality assurance and training resources to minimize patient risk.
To identify points of risk, a failure mode and effects analysis (FMEA) was performed by a multidisciplinary team of clinicians and software developers. Changes were then made to limit the risk of 76% of high-risk failures. These risk points were then incorporated into hazard testing, and we found that 62% of errors could be detected before a plan was created in the RPA. The user interface was then modified to limit the number of errors that would be propagated into the automatic planning process.
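The risk-ranking step of a failure mode and effects analysis can be illustrated with a minimal sketch. This is a generic example of the common risk priority number scoring (RPN = occurrence × severity × detectability), not the study's actual failure modes or scores; the failure descriptions and scale values below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    occurrence: int     # 1 (rare) .. 10 (frequent)
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    detectability: int  # 1 (always caught) .. 10 (never caught)

    @property
    def rpn(self) -> int:
        # Risk priority number: the product of the three scores.
        return self.occurrence * self.severity * self.detectability

# Hypothetical failure modes for illustration only.
modes = [
    FailureMode("Wrong CT scan selected", 3, 8, 6),
    FailureMode("Incorrect prescription entered", 4, 9, 5),
    FailureMode("Autocontour misses target volume", 2, 9, 5),
]

# Rank highest-risk failures first so mitigations target them.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN={m.rpn:4d}  {m.description}")
```

High-RPN failure modes are the natural candidates for interface changes and checklist items, which is the role such a ranking typically plays in a safety analysis like the one described above.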
Following the changes made to optimize the safety of the user interface, the efficacy of error detection during the plan review process was assessed. A custom checklist was developed to guide the review of automatically generated treatment plans, based on the results of our FMEA and AAPM TG-275. When the customized checklist was used during final physics plan checks, error detection rates increased by 20% for physicists and 17% for medical physics residents.
An end-to-end test was then performed to evaluate the entirety of the RPA training and deployment procedure for new users. Users were asked to review training materials and generate 10 treatment plans covering all treatment sites available in the RPA. Following training, 100% of the errors present in these plans were detected, and users reported that the developed training materials provided them with all the information needed to generate safe, high-quality treatment plans.
Finally, a real-time contour monitoring system was developed to limit the risk of systematic errors and detect abnormalities in the contouring process that could be attributed to software error, off-label use, or automation bias.
In conclusion, we have optimized the safety and efficacy of the RPA training, quality assurance, and deployment processes. This evaluation has not only allowed us to maximize the impact of our automated treatment planning tool, the RPA, but has also generated results that should inform the development of safe AI software and clinical deployment procedures in future clinical environments.
Partnering with health system operations leadership to develop a controlled implementation trial
Background: Outcomes for mental health conditions are suboptimal, and care is fragmented. Evidence from controlled trials indicates that collaborative chronic care models (CCMs) can improve outcomes in a broad array of mental health conditions. US Department of Veterans Affairs leadership launched a nationwide initiative to establish multidisciplinary teams in general mental health clinics in all medical centers. As part of this effort, leadership partnered with implementation researchers to develop a program evaluation protocol to provide rigorous scientific data addressing two implementation questions: (1) Can evidence-based CCMs be successfully implemented using existing staff in general mental health clinics supported by internal and external implementation facilitation? (2) What is the impact of CCM implementation efforts on patient health status and perceptions of care?
Methods/design: Health system operations leaders and researchers partnered in an iterative process to design a protocol that balances operational priorities, scientific rigor, and feasibility. Joint design decisions addressed identification of study sites, the patient population of interest, intervention design, and outcome assessment and analysis. Nine sites have been enrolled in the intervention-implementation hybrid type III stepped-wedge design. Using balanced randomization, sites have been assigned to receive implementation support in one of three waves beginning at 4-month intervals, with support lasting 12 months. Implementation support consists of the US Centers for Disease Control and Prevention's Replicating Effective Programs strategy supplemented by external and internal implementation facilitation, and is compared to dissemination of materials plus technical assistance conference calls. Formative evaluation focuses on the recipients, context, innovation, and facilitation process. Summative evaluation combines quantitative and qualitative outcomes. Quantitative CCM fidelity measures (at the site level) plus health outcome measures (at the patient level; n = 765) are collected in a repeated-measures design and analyzed with general linear modeling. Qualitative data from provider interviews at baseline and 1 year elaborate the CCM fidelity data and provide insight into barriers and facilitators of implementation.
Discussion: Conducting a jointly designed, highly controlled protocol in the context of health system operational priorities increases the likelihood that time-sensitive questions of operational importance will be answered rigorously and that the outcomes will result in sustainable change in the health-care system.
Trial registration: NCT02543840 (https://www.clinicaltrials.gov/ct2/show/NCT02543840)
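The stepped-wedge rollout described above can be sketched with the abstract's own parameters: nine sites randomized to three waves, waves starting at 4-month intervals, and each site receiving 12 months of implementation support. The site labels below are hypothetical placeholders, not the actual study sites.

```python
def stepped_wedge(n_sites=9, n_waves=3, wave_interval=4, support_months=12):
    """Return (site, support_start_month, support_end_month) tuples
    for a balanced stepped-wedge rollout."""
    per_wave = n_sites // n_waves
    schedule = []
    for wave in range(n_waves):
        start = wave * wave_interval  # each wave begins 4 months after the last
        for i in range(per_wave):
            site = f"Site-{wave * per_wave + i + 1}"
            schedule.append((site, start, start + support_months))
    return schedule

for site, start, end in stepped_wedge():
    print(f"{site}: support months {start}-{end}")
```

Because waves start at staggered times, later-wave sites serve as concurrent controls for earlier-wave sites until their own support begins, which is the core design property the protocol exploits.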